
    Incentives and co-evolution: Steering linear dynamical systems with noncooperative agents

    Modern socio-technical systems typically consist of many interconnected users and competing service providers, where notions like market equilibrium are tightly connected to the "evolution" of the network of users. In this paper, we model the users' dynamics as a linear dynamical system and the service providers as agents taking part in a generalized Nash game, whose outcome coincides with the input to the users' dynamics. We thus characterize the notion of co-evolution of the market and the network dynamics, and derive dissipativity-based conditions leading to a pertinent notion of equilibrium. We then focus on the control design and adopt a light-touch policy that incentivizes or penalizes the service providers as little as possible while steering the networked system to a desirable outcome. We also provide a dimensionality-reduction procedure, which yields conditions independent of the network size. Finally, we illustrate our novel notions and algorithms on a simulation setup stemming from digital market regulations for influencers, a topic of growing interest.
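    The abstract does not spell out the game or the dynamics; the snippet below is only a minimal illustrative sketch of such a co-evolution loop, with two hypothetical providers, quadratic costs, and plain best-response iteration standing in for a generalized Nash solver. All matrices and cost weights here are placeholders, not the paper's model.

```python
# Minimal sketch (not the paper's algorithm): the users' network evolves as a
# linear system x[k+1] = A x[k] + B1 u1[k] + B2 u2[k], while two hypothetical
# service providers pick their inputs as a Nash equilibrium of a quadratic
# game, computed here by simple best-response iteration.
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2                      # users' state dim, each provider's input dim
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B1, B2 = rng.standard_normal((n, m)), rng.standard_normal((n, m))

def best_response(x, u_other, B_own, B_other, weight=1.0):
    # Provider minimizes ||A x + B_own u + B_other u_other||^2 + weight*||u||^2
    H = B_own.T @ B_own + weight * np.eye(m)
    g = B_own.T @ (A @ x + B_other @ u_other)
    return -np.linalg.solve(H, g)

x = rng.standard_normal(n)
u1, u2 = np.zeros(m), np.zeros(m)
for k in range(50):                       # co-evolution loop
    for _ in range(20):                   # inner Nash-seeking iterations
        u1 = best_response(x, u2, B1, B2)
        u2 = best_response(x, u1, B2, B1)
    x = A @ x + B1 @ u1 + B2 @ u2         # users' linear dynamics
print("final state norm:", np.linalg.norm(x))
```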

    A learning-based approach to multi-agent decision-making

    We propose a learning-based methodology to reconstruct private information held by a population of interacting agents in order to predict an exact outcome of the underlying multi-agent interaction process, here identified as a stationary action profile. We envision a scenario where an external observer, endowed with a learning procedure, is allowed to make queries and observe the agents' reactions through private action-reaction mappings, whose collective fixed point corresponds to a stationary profile. By adopting a smart query process to iteratively collect informative data and update parametric estimates, we establish sufficient conditions guaranteeing that, if the proposed learning-based methodology converges, it can only converge to a stationary action profile. This fact yields two main consequences: i) learning locally exact surrogates of the action-reaction mappings allows the external observer to succeed in its prediction task, and ii) since we work under assumptions so general that a stationary profile is not even guaranteed to exist, the established sufficient conditions also act as certificates for the existence of such a desirable profile. Extensive numerical simulations involving typical competitive multi-agent control and decision-making problems illustrate the practical effectiveness of the proposed learning-based approach.
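    As an illustration of the query-and-surrogate idea, the sketch below assumes scalar actions and hidden affine action-reaction mappings, fits per-agent affine surrogates by least squares, and proposes the surrogates' fixed point as the next query. The reaction model, the exploratory-query rule, and the affine surrogate class are all assumptions made for this sketch, not the paper's exact scheme.

```python
# Illustrative sketch (assumptions, not the paper's exact method): an observer
# queries a joint action profile, records each agent's reaction through its
# private action-reaction mapping, fits an affine surrogate by least squares,
# and proposes the fixed point of the surrogates as the next query.
import numpy as np

rng = np.random.default_rng(1)
N = 3                                    # number of agents, scalar actions
W = 0.3 * rng.standard_normal((N, N))    # hidden, private reaction weights
b = rng.standard_normal(N)
def true_reaction(a):                    # private mappings r(a) = W a + b
    return W @ a + b

queries, reactions = [], []
a = np.zeros(N)
for t in range(30):
    r = true_reaction(a)                 # observe the agents' reactions
    queries.append(a.copy()); reactions.append(r.copy())
    if len(queries) > N:                 # enough data to fit surrogates
        X = np.hstack([np.array(queries), np.ones((len(queries), 1))])
        theta, *_ = np.linalg.lstsq(X, np.array(reactions), rcond=None)
        W_hat, b_hat = theta[:N].T, theta[N]
        # next query: fixed point a = W_hat a + b_hat of the surrogate model
        a = np.linalg.solve(np.eye(N) - W_hat, b_hat)
    else:
        a = rng.standard_normal(N)       # exploratory queries at the start
print("predicted stationary profile:", a)
print("residual ||a - r(a)||:", np.linalg.norm(a - true_reaction(a)))
```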

    Reliably-stabilizing piecewise-affine neural network controllers

    A common problem affecting neural network (NN) approximations of model predictive control (MPC) policies is the lack of analytical tools to assess the stability of the closed-loop system under the action of the NN-based controller. We present a general procedure to quantify the performance of such a controller, or to design minimum-complexity NNs with rectified linear units (ReLUs) that preserve the desirable properties of a given MPC scheme. By quantifying the approximation error between the NN-based and MPC-based state-to-input mappings, we first establish suitable conditions involving two key quantities, the worst-case error and the Lipschitz constant, that guarantee the stability of the closed-loop system. We then develop an offline, mixed-integer optimization-based method to compute those quantities exactly. Together, these techniques provide conditions sufficient to certify the stability and performance of a ReLU-based approximation of an MPC control law.
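    The paper computes the worst-case error and the Lipschitz constant exactly via mixed-integer optimization; the sketch below only estimates both quantities by sampling a box of states, which is a much weaker stand-in for that method. The "MPC" law is a hypothetical saturated linear feedback and the ReLU network has random placeholder weights, purely to show what the two quantities measure.

```python
# Rough sketch only: sampling-based *estimates* of the worst-case approximation
# error and of the controller's Lipschitz constant, in place of the exact
# mixed-integer computation described in the abstract.
import numpy as np

rng = np.random.default_rng(2)
nx = 2                                    # state dimension

def mpc_law(x):                           # stand-in for an explicit MPC policy
    return np.clip(-np.array([0.6, 1.1]) @ x, -1.0, 1.0)

# A tiny ReLU network u = w2 @ relu(W1 x + b1) + b2 (weights are placeholders).
W1, b1 = rng.standard_normal((8, nx)), rng.standard_normal(8)
w2, b2 = 0.3 * rng.standard_normal(8), 0.0
def nn_law(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Sample the state box [-2, 2]^2 and estimate the two key quantities.
X = rng.uniform(-2.0, 2.0, size=(20000, nx))
err = np.array([nn_law(x) - mpc_law(x) for x in X])
worst_case_error = np.abs(err).max()

# Empirical Lipschitz estimate of the NN controller from random state pairs.
idx = rng.integers(0, len(X), size=(20000, 2))
num = np.abs([nn_law(X[i]) - nn_law(X[j]) for i, j in idx])
den = np.linalg.norm(X[idx[:, 0]] - X[idx[:, 1]], axis=1)
lipschitz_est = (num / np.maximum(den, 1e-9)).max()

print("sampled worst-case error:", round(float(worst_case_error), 3))
print("sampled Lipschitz constant:", round(float(lipschitz_est), 3))
```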
    • …